32 research outputs found

    Adaptive saccade controller inspired by the primates' cerebellum

    Saccades are fast eye movements that allow humans and robots to bring the visual target into the center of the visual field. Saccades are open loop with respect to the vision system, thus their execution requires precise knowledge of the internal model of the oculomotor system. In this work, we modeled saccade control, taking inspiration from the recurrent loops between the cerebellum and the brainstem. In this model, the brainstem acts as a fixed inverse model of the oculomotor system, while the cerebellum acts as an adaptive element that learns the internal model of the oculomotor system. The adaptive filter is implemented using a state-of-the-art neural network, called I-SSGPR. The proposed approach, named the recurrent architecture, was validated through experiments performed both in simulation and on an anthropomorphic robotic head. Moreover, we compared the recurrent architecture with another model of the cerebellum, feedback error learning. The achieved results show that the recurrent architecture outperforms feedback error learning in terms of accuracy and insensitivity to the choice of the feedback controller.
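The control scheme described in this abstract can be illustrated with a minimal sketch. Here the adaptive cerebellar filter is stood in for by a simple online linear regressor (the paper uses I-SSGPR), and the brainstem is a fixed, imperfect linear inverse model; the plant gain, learning rate, and update rule are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Sketch of a recurrent cerebellum-brainstem loop for saccade control.
# The cerebellar weight w adapts so that (brainstem + cerebellum)
# approximates the true inverse of the oculomotor plant.

rng = np.random.default_rng(0)
true_gain = 2.0                 # unknown plant: motor command -> eye displacement
brainstem_gain = 1.0 / 1.5      # fixed, imperfect inverse model
w = 0.0                         # adaptive (cerebellar) correction weight
lr = 0.1                        # learning rate

for trial in range(200):
    target = rng.uniform(-10, 10)              # desired saccade amplitude (deg)
    # Recurrent loop: the cerebellum corrects the brainstem's command
    command = (brainstem_gain + w) * target
    displacement = true_gain * command         # oculomotor plant response
    error = target - displacement              # post-saccadic visual error
    # Adapt the filter to cancel the residual error
    w += lr * error * target / (target * target + 1e-9)

# After learning, brainstem_gain + w approximates 1 / true_gain = 0.5
print(round(brainstem_gain + w, 3))
```

Because the loop is recurrent, the adaptive element only has to learn the residual mismatch between the fixed brainstem model and the real plant, rather than the full inverse model from scratch.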

    Learning the visual–oculomotor transformation: effects on saccade control and space representation

    Active eye movements can be exploited to build a visuomotor representation of the surrounding environment. Maintaining and improving such a representation requires updating the internal model involved in the generation of eye movements. From this perspective, action and perception are thus tightly coupled and interdependent. In this work, we encoded the internal model for oculomotor control with an adaptive filter inspired by the functionality of the cerebellum. Recurrent loops between a feedback controller and the internal model allow our system to perform accurate binocular saccades and create an implicit representation of the nearby space. Simulation results show that this recurrent architecture outperforms classical feedback error learning in terms of both accuracy and insensitivity to system parameters. The proposed approach was validated by implementing the framework on an anthropomorphic robotic head.

    A hierarchical system for a distributed representation of the peripersonal space of a humanoid robot

    Reaching a target object in an unknown and unstructured environment is easily performed by human beings. However, designing a humanoid robot that executes the same task requires the implementation of complex abilities, such as identifying the target in the visual field, estimating its spatial location, and precisely driving the motors of the arm to reach it. While research usually tackles the development of such abilities individually, in this work we integrate a number of computational models into a unified framework, and demonstrate in a humanoid torso the feasibility of an integrated working representation of its peripersonal space. To achieve this goal, we propose a cognitive architecture that connects several models inspired by neural circuits of the visual, frontal, and posterior parietal cortices of the brain. The outcome of the integration process is a system that allows the robot to create its internal model and its representation of the surrounding space by interacting with the environment directly, through a mutual adaptation of perception and action. The robot is eventually capable of executing a set of tasks, such as recognizing, gazing at, and reaching target objects, which can work separately or cooperate to support more structured and effective behaviors.

    The robot programming network

    The Robot Programming Network (RPN) is an initiative for creating a network of robotics education laboratories with remote programming capabilities. It consists of both online open course materials and online servers that are ready to execute and test the programs written by remote students. Online materials include introductory course modules on robot programming, mobile robotics, and humanoids, ranging from basic concepts in science, technology, engineering, and mathematics (STEM) to more advanced programming skills. The students have access to the online server hosts, where they submit and run their programming code on the fly. The hosts run a variety of robot simulation environments, and access to real robots can also be granted upon successful achievement of the course modules. The learning materials provide step-by-step guidance for solving problems of increasing difficulty. Skill tests and challenges are given for checking progress, and online competitions are scheduled for additional motivation and fun. Use of standard robotics middleware (ROS) allows the system to be extended to a large number of robot platforms, and connected to other existing tele-laboratories for building a large social network for online teaching of robotics. Support of IEEE RAS through the CEMRA program (Creation of Educational Material for Robotics and Automation) is gratefully acknowledged. This paper describes research done at the Robotic Intelligence Laboratory. Support for this laboratory is provided in part by Ministerio de Economia y Competitividad (DPI2011-27846), by Generalitat Valenciana (PROMETEOII/2014/028), and by Universitat Jaume I (P1-1B2011-54).

    The neuroscience of vision-based grasping: a functional review for computational modeling and bio-inspired robotics

    The topic of vision-based grasping is being widely studied using various techniques and with different goals in humans and in other primates. The fundamental related findings are reviewed in this paper, with the aim of providing researchers from different fields, including intelligent robotics and neural computation, with a comprehensive but accessible view of the subject. A detailed description of the principal sensorimotor processes and the brain areas involved in them is provided from a functional perspective, in order to make this survey especially useful for computational modeling and bio-inspired robotic applications.

    Interpretable networks with BP-SOM

    No full text

    No full text
    Neither of the classical visual servoing approaches, position-based and image-based, is completely satisfactory. In position-based visual servoing the trajectory of the robot is well defined, but the approach suffers mainly from the image features leaving the visual field of the cameras. On the other hand, image-based visual servoing has been found generally satisfactory and robust in the presence of camera and hand–eye calibration errors. However, in some cases, singularities and local minima may arise, and the robot may reach its joint limits. This paper is a step towards a synthesis of both approaches that keeps their particular advantages, i.e., the trajectory of the camera motion is predictable and the image features remain in the field of view of the camera. The basis is the introduction of three-dimensional information in the feature vector: point depth and object pose produce useful behavior in the control of the camera.
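The idea of augmenting the image-based feature vector with three-dimensional information can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the feature vector stacks an image-plane error with a log-depth term, and a simple decoupled proportional law (an assumed control gain `lam`) drives each component to zero.

```python
import numpy as np

# Hybrid feature vector for visual servoing: 2-D image coordinates
# augmented with a log-depth term, so both the image trajectory and
# the camera's range to the target are regulated.

def hybrid_error(p, p_star, Z, Z_star):
    """Stack the image-plane error with a log-depth error term."""
    return np.concatenate([p - p_star, [np.log(Z / Z_star)]])

lam = 0.5                             # proportional control gain (assumed)
p, Z = np.array([0.2, -0.1]), 1.5     # current image point and depth
p_star, Z_star = np.zeros(2), 1.0     # desired image point and depth

for _ in range(50):
    e = hybrid_error(p, p_star, Z, Z_star)
    # Decoupled proportional law: each feature error decays exponentially
    p = p - lam * e[:2]
    Z = Z * np.exp(-lam * e[2])

print(np.allclose(hybrid_error(p, p_star, Z, Z_star), 0, atol=1e-6))
```

Regulating the image-plane error keeps the features in the camera's field of view, while the depth/pose terms make the three-dimensional camera trajectory predictable, which is the synthesis of the two classical approaches the abstract describes.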

    Qualitative Theory of Shape and Structure

    No full text

    The Dorso-medial visual stream: from Neural Activation to Sensorimotor Interaction

    No full text
    The posterior parietal cortex of primates, and more exactly areas of the dorso-medial visual stream, are able to encode the peripersonal space of a subject in a way suitable for gathering visual information and contextually performing purposeful gazing and arm reaching movements. Such sensorimotor knowledge of the environment is not explicit, but rather emerges through the interaction of the subject with nearby objects. In this work, single-cell data regarding the activation of primate dorso-medial stream neurons during gazing and reaching movements is studied, with the purpose of discovering meaningful patterns for modeling. The outline of a model of the mechanisms that allow humans and other primates to build dynamic representations of their peripersonal space through active interaction with nearby objects is proposed, and a detailed description of how to employ the results of the data analysis in the model is offered. Applying the model to robotic systems will allow artificial agents to improve their skills in exploring the nearby space, and will at the same time constitute a way to validate the modeling assumptions.